Results 1 - 3 of 3

1.
Bioengineering (Basel); 11(3), 2024 Feb 24.
Article in English | MEDLINE | ID: mdl-38534488

ABSTRACT

The delineation of parotid glands in head and neck (HN) carcinoma is critical for radiotherapy (RT) planning. Segmentation ensures accurate target positioning and precise treatment delivery, facilitates monitoring of anatomical changes, enables plan adaptation, and enhances overall patient safety. In this context, artificial intelligence (AI) and deep learning (DL) have proven highly effective in precisely outlining tumor tissues and, by extension, the organs at risk. This paper introduces a DL framework using the AttentionUNet neural network for automatic parotid gland segmentation in HN cancer. The model is evaluated extensively on two public datasets and one private dataset, and its segmentation accuracy is compared with other state-of-the-art DL segmentation schemes. To assess the need for replanning during treatment, an additional registration method is applied to the segmentation output, aligning images of different modalities (Computed Tomography (CT) and Cone Beam CT (CBCT)). AttentionUNet outperforms similar DL methods (Dice Similarity Coefficient: 82.65% ± 1.03, Hausdorff Distance: 6.24 mm ± 2.47), confirming its effectiveness. Moreover, the subsequent registration procedure shows increased similarity, providing insight into the effects of RT procedures for treatment planning adaptation. The implementation of the proposed methods indicates the effectiveness of DL not only for automatic delineation of anatomical structures but also for providing information to support adaptive RT.
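For illustration, the following is a minimal sketch of the attention-gate idea behind Attention U-Net (re-weighting skip-connection features with a gating signal), assuming PyTorch; the AttentionGate class, the layer sizes, and the equal-resolution gating signal are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal attention-gate sketch (illustrative, assuming PyTorch): skip-connection
# features x are re-weighted by attention coefficients computed from a gating
# signal g. In the full Attention U-Net, g comes from a coarser decoder level
# and is resampled; here both inputs are assumed to share the same resolution.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        self.theta = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, g):
        att = self.relu(self.theta(x) + self.phi(g))   # joint feature map
        att = self.sigmoid(self.psi(att))              # per-pixel coefficients in [0, 1]
        return x * att                                 # suppress irrelevant regions

# Toy usage: gate 64-channel encoder features with a 128-channel gating signal.
x = torch.randn(1, 64, 32, 32)
g = torch.randn(1, 128, 32, 32)
print(AttentionGate(64, 128, 32)(x, g).shape)          # torch.Size([1, 64, 32, 32])
```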

2.
J Med Imaging (Bellingham); 10(3): 034002, 2023 May.
Article in English | MEDLINE | ID: mdl-37274759

ABSTRACT

Purpose: Image registration, the alignment of images into a common coordinate frame, is a very common procedure in dental applications. Registration between pairs of images taken from different angles can improve diagnosis. Our study presents an edge-enhanced unsupervised deep learning (DL)-based deformable registration framework for aligning two-dimensional (2D) pairs of dental x-ray images. Approach: The proposed neural network combines a U-Net-like structure, which produces a displacement field, with a spatial transformer network, which produces the transformed image. The proposed structure is trained end-to-end by minimizing a weighted loss function consisting of three parts corresponding to image similarity, edge similarity, and registration restrictions. In this regard, the proposed edge-specific loss enhances the unsupervised training of the registration framework without the need for supervision through anatomical structures. Results: The proposed framework was applied to two datasets: a set of 104 x-ray images of mandibles, arranged in 2600 pairs for training and testing, and a set of 17 pairs of pre- and post-operative reconstructed panoramic images. The proposed model outperformed both conventional registration methods and DL-based techniques in both qualitative and quantitative assessment, in most of the compared metrics concerning intensity similarity and edge distances. Conclusions: The proposed framework achieved accurate and fast deformable alignment of pairs of 2D dental radiographic images. The edge-based module of the loss function enhances the unsupervised learning by directing the network toward deformations that take into account the edges of the depicted objects (teeth, bone, and tissue), which are crucial for diagnosis.
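As a sketch of the kind of weighted loss described above (image similarity, edge similarity, and registration restrictions), the snippet below combines an MSE intensity term, a Sobel-based edge term, and a displacement-field smoothness penalty, assuming PyTorch; the Sobel operator, the weights, and the function names are assumptions, not the authors' exact formulation.

```python
# Illustrative loss sketch (assuming PyTorch): weighted sum of an intensity
# similarity term, an edge similarity term computed with Sobel filters, and a
# smoothness penalty on the displacement field. Weights and terms are assumptions.
import torch
import torch.nn.functional as F

def sobel_edges(img):
    # img: (N, 1, H, W) float tensor; returns approximate edge magnitude.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def smoothness(flow):
    # flow: (N, 2, H, W) displacement field; penalize large spatial gradients.
    dx = torch.abs(flow[:, :, :, 1:] - flow[:, :, :, :-1])
    dy = torch.abs(flow[:, :, 1:, :] - flow[:, :, :-1, :])
    return dx.mean() + dy.mean()

def registration_loss(warped, fixed, flow, w_img=1.0, w_edge=0.5, w_reg=0.1):
    img_term = F.mse_loss(warped, fixed)                             # image similarity
    edge_term = F.mse_loss(sobel_edges(warped), sobel_edges(fixed))  # edge similarity
    reg_term = smoothness(flow)                                      # registration restriction
    return w_img * img_term + w_edge * edge_term + w_reg * reg_term

# Toy usage with random tensors (shapes only):
fixed, warped = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
flow = torch.zeros(1, 2, 64, 64)
print(registration_loss(warped, fixed, flow))
```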

3.
Article in English | MEDLINE | ID: mdl-37015600

ABSTRACT

Metastatic Melanoma (MM) is an aggressive type of cancer that produces metastases throughout the body and has very poor survival rates. Recent advances in immunotherapy have shown promising results for controlling disease progression. Because progression is often rapid, fast and accurate diagnosis and treatment response assessment are vital for patient management. These procedures require accurate, whole-body tumor identification, which can be offered by Positron Emission Tomography (PET)/Computed Tomography (CT) with the radiotracer 18F-Fluorodeoxyglucose (FDG). However, manual segmentation of PET/CT images is a very time-consuming and labor-intensive procedure that requires expert knowledge. Most previously published segmentation techniques focus on a specific type of tumor or part of the body and require a large amount of manually labeled data, which is difficult to obtain for MM. Multimodal analysis of PET/CT is also crucial because FDG-PET contains only the functional information of tumors, which can be complemented by the anatomical information of CT. In this paper, we propose a whole-body segmentation framework capable of efficiently identifying the highly heterogeneous tumor lesions of MM from whole-body 3D FDG-PET/CT images. The proposed decision support system begins with an ensemble unsupervised segmentation of regions of high FDG uptake based on Fuzzy C-means and a custom region growing algorithm. Then, a region classification model based on radiomics features and neural networks classifies these regions as tumor or non-tumor. Experimental results showed high performance in the identification of MM lesions, with a sensitivity of 83.68%, specificity of 91.82%, F1-score of 75.42%, AUC of 94.16%, and balanced accuracy of 87.75%, which was also supported by the evaluation on a public dataset.
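As an illustration of the Fuzzy C-means step that can seed an unsupervised segmentation of high-uptake regions, the following is a minimal NumPy sketch; the cluster count, fuzzifier, and the fuzzy_cmeans helper are illustrative assumptions and do not reproduce the authors' ensemble or region-growing stages.

```python
# Minimal Fuzzy C-means sketch in NumPy (illustrative): cluster voxel intensities
# into background vs. high-uptake candidates. Cluster count, fuzzifier m, and the
# membership threshold are assumptions.
import numpy as np

def fuzzy_cmeans(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    # x: (N, D) feature vectors (e.g. SUV per voxel). Returns memberships, centers.
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                      # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]     # fuzzy-weighted means
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        u = 1.0 / d ** (2.0 / (m - 1.0))                   # inverse-distance update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

# Toy usage: separate simulated "high-uptake" values from background intensities.
rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(1.0, 0.2, 500), rng.normal(6.0, 1.0, 50)]).reshape(-1, 1)
u, centers = fuzzy_cmeans(vals)
high_uptake = u[:, np.argmax(centers[:, 0])] > 0.5         # candidate tumor voxels
print(int(high_uptake.sum()))
```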
